OpenAI says Russian and Israeli groups used its tools to spread disinformation

The Guardian

OpenAI on Thursday released its first-ever report on how its artificial intelligence tools are being used for covert influence operations, revealing that the company had disrupted disinformation campaigns originating from Russia, China, Israel and Iran. Malicious actors used the company's generative AI models to create and post propaganda content across social media platforms, and to translate their content into different languages. None of the campaigns gained traction or reached large audiences, according to the report. As generative AI has become a booming industry, there has been widespread concern among researchers and lawmakers over its potential for increasing the quantity and quality of online disinformation. Artificial intelligence companies such as OpenAI, which makes ChatGPT, have tried with mixed results to assuage these concerns and place guardrails on their technology.


Sam Altman's World Tour Hopes to Reassure AI Doomers

WIRED

The excitement around the London arrival of OpenAI CEO Sam Altman was palpable from the queue that snaked its way around the University College London building ahead of his speech on Wednesday afternoon. Hundreds of eager-faced students and admirers of OpenAI's chatbot ChatGPT had come to watch the UK leg of Altman's world tour, which is expected to take him to around 17 cities. This week, he has already visited Paris and Warsaw. Last week he was in Lagos. But the queue was soundtracked by a small group of people who had traveled to loudly express their anxiety that AI is advancing too fast.


How Deepfake Videos Are Used to Spread Disinformation - The New York Times

#artificialintelligence

Their voices were stilted and failed to sync with the movement of their mouths. Their faces had a pixelated, video-game quality and their hair appeared unnaturally plastered to the head. The captions were filled with grammatical mistakes. The two broadcasters, purportedly anchors for a news outlet called Wolf News, are not real people. They are computer-generated avatars created by artificial intelligence software.


Disinformation

#artificialintelligence

Over the last four weeks, we have explored the use of Artificial Intelligence (AI) in real-world scenarios where automation can provide positive support to those in need and to the economy as a whole. However, AI can also be misused when in the hands of ill-intentioned people engaged in the global power play. One of its manifestations is the increased weaponization of AI for the purpose of destabilizing the balance of power. This presents a complex challenge for national governments as well as a growing threat to global security. It is in this context that this week's blog explores the positive and negative uses of Artificial Intelligence.


Don't underestimate the cheapfake

MIT Technology Review

On November 30, Chinese foreign ministry spokesman Lijian Zhao pinned an image to his Twitter profile. In it, a soldier stands on an Australian flag and grins maniacally as he holds a bloodied knife to a boy's throat. The boy, whose face is covered by a semi-transparent veil, carries a lamb. Alongside the image, Zhao tweeted, "Shocked by murder of Afghan civilians & prisoners by Australian soldiers. We strongly condemn such acts, & call [sic] for holding them accountable."


How A.I. Could Be Weaponized to Spread Disinformation

#artificialintelligence

Tech giants like Facebook and governments around the world are struggling to deal with disinformation, from misleading posts about vaccines to incitement of sectarian violence. As artificial intelligence becomes more powerful, experts worry that disinformation generated by A.I. could make an already complex problem bigger and even more difficult to solve. In recent months, two prominent labs -- OpenAI in San Francisco and the Allen Institute for Artificial Intelligence in Seattle -- have built particularly powerful examples of this technology. Both have warned that it could become increasingly dangerous. Alec Radford, a researcher at OpenAI, argued that this technology could help governments, companies and other organizations spread disinformation far more efficiently: Rather than hire human workers to write and distribute propaganda, these organizations could lean on machines to compose believable and varied content at tremendous scale.